Chapter 13

Out of Our Depths


A man has got to know his limitations.

— Clint Eastwood in Magnum Force


Most people are familiar with the idea that some of our ordeals come from a mismatch between the source of our passions in evolutionary history and the goals we set for ourselves today. People gorge themselves in anticipation of a famine that never comes, engage in dangerous liaisons that conceive babies they don't want, and rev up their bodies in response to stressors from which they cannot run away.

What is true for the emotions may also be true for the intellect. Some of our perplexities may come from a mismatch between the purposes for which our cognitive faculties evolved and the purposes to which we put them today. This is obvious enough when it comes to raw data processing. People do not try to multiply six-digit numbers in their heads or remember the phone number of everyone they meet, because they know their minds were not designed for the job. But it is not as obvious when it comes to the way we conceptualize the world. Our minds keep us in touch with aspects of reality — such as objects, animals, and people — that our ancestors dealt with for millions of years. But as science and technology open up new and hidden worlds, our untutored intuitions may find themselves at sea.

What are these intuitions? Many cognitive scientists believe that human reasoning is not accomplished by a single, general-purpose computer in the head. The world is a heterogeneous place, and we are equipped with different kinds of intuitions and logics, each appropriate to one department of reality. These ways of knowing have been called systems, modules, stances, faculties, mental organs, multiple intelligences, and reasoning engines.1 They emerge early in life, are present in every normal person, and appear to be computed in partly distinct sets of networks in the brain. They may be installed by different combinations of genes, or they may emerge when brain tissue self-organizes in response to different problems to be solved and different patterns in the sensory input. Most likely they develop by some combination of these forces.

What makes our reasoning faculties different from the departments in a university is that they are not just broad areas of knowledge, analyzed with whatever tools work best. Each faculty is based on a core intuition that was suitable for analyzing the world in which we evolved. Though cognitive scientists have not agreed on a Gray's Anatomy of the mind, here is a tentative but defensible list of cognitive faculties and the core intuitions on which they are based:

• An intuitive physics, which we use to keep track of how objects fall, bounce, and bend. Its core intuition is the concept of the object, which occupies one place, exists for a continuous span of time, and follows laws of motion and force. These are not Newton's laws but something closer to the medieval conception of impetus, an “oomph” that keeps an object in motion and gradually dissipates.2

• An intuitive version of biology or natural history, which we use to understand the living world. Its core intuition is that living things house a hidden essence that gives them their form and powers and drives their growth and bodily functions.3

• An intuitive engineering, which we use to make and understand tools and other artifacts. Its core intuition is that a tool is an object with a purpose — an object designed by a person to achieve a goal.4

• An intuitive psychology, which we use to understand other people. Its core intuition is that other people are not objects or machines but are animated by the invisible entity we call the mind or the soul. Minds contain beliefs and desires and are the immediate cause of behavior.

• A spatial sense, which we use to navigate the world and keep track of where things are. It is based on a dead reckoner, which updates coordinates of the body's location as it moves and turns, and a network of mental maps. Each map is organized by a different reference frame: the eyes, the head, the body, or salient objects and places in the world.5

• A number sense, which we use to think about quantities and amounts. It is based on an ability to register exact quantities for small numbers of objects (one, two, and three) and to make rough relative estimates for larger numbers.6

• A sense of probability, which we use to reason about the likelihood of uncertain events. It is based on the ability to track the relative frequencies of events, that is, the proportion of events of some kind that turn out one way or the other.7

• An intuitive economics, which we use to exchange goods and favors. It is based on the concept of reciprocal exchange, in which one party confers a benefit on another and is entitled to an equivalent benefit in return.

• A mental database and logic, which we use to represent ideas and to infer new ideas from old ones. It is based on assertions about what's what, what's where, or who did what to whom, when, where, and why. The assertions are linked in a mind-wide web and can be recombined with logical and causal operators such as and, or, not, all, some, necessary, possible, and cause.8

• Language, which we use to share the ideas from our mental logic. It is based on a mental dictionary of memorized words and a mental grammar of combinatorial rules. The rules organize vowels and consonants into words, words into bigger words and phrases, and phrases into sentences, in such a way that the meaning of the combination can be computed from the meanings of the parts and the way they are arranged.9

The mind also has components for which it is hard to tell where cognition leaves off and emotion begins. These include a system for assessing danger, coupled with the emotion called fear; a system for assessing contamination, coupled with the emotion called disgust; and a moral sense, which is complex enough to deserve a chapter of its own.

These ways of knowing and core intuitions are suitable for the lifestyle of small groups of illiterate, stateless people who live off the land, survive by their wits, and depend on what they can carry. Our ancestors left this lifestyle for a settled existence only a few millennia ago, too recently for evolution to have done much, if anything, to our brains. Conspicuous by their absence are faculties suited to the stunning new understanding of the world wrought by science and technology. For many domains of knowledge, the mind could not have evolved dedicated machinery, the brain and genome show no hints of specialization, and people show no spontaneous intuitive understanding either in the crib or afterward. They include modern physics, cosmology, genetics, evolution, neuroscience, embryology, economics, and mathematics.

It's not just that we have to go to school or read books to learn these subjects. It's that we have no mental tools to grasp them intuitively. We depend on analogies that press an old mental faculty into service, or on jerry-built mental contraptions that wire together bits and pieces of other faculties. Understanding in these domains is likely to be uneven, shallow, and contaminated by primitive intuitions. And that can shape debates in the border disputes in which science and technology make contact with everyday life. The point of this chapter is that together with all the moral, empirical, and political factors that go into these debates, we should add the cognitive factors: the way our minds naturally frame issues. Our own cognitive makeup is a missing piece of many puzzles, including education, bioethics, food safety, economics, and human understanding itself.

~

The most obvious arena in which we confront native ways of thinking is the schoolhouse. Any theory of education must be based on a theory of human nature, and in the twentieth century that theory was often the Blank Slate or the Noble Savage.

Traditional education is based in large part on the Blank Slate: children come to school empty and have knowledge deposited in them, to be reproduced later on tests. (Critics of traditional education call this the “savings and loan” model.) The Blank Slate also underlies the common philosophy that the early school-age years are an opportunity zone in which social values are shaped for life. Many schools today use the early grades to instill desirable attitudes toward the environment, gender, sexuality, and ethnic diversity.

Progressive educational practice, for its part, is based on the Noble Savage. As A. S. Neill wrote in his influential book Summerhill, “A child is innately wise and realistic. If left to himself without adult suggestion of any kind, he will develop as far as he is capable of developing.”10 Neill and other progressive theorists of the 1960s and 1970s argued that schools should do away with examinations, grades, curricula, and even books. Though few schools went that far, the movement left a mark on educational practice. In the method of reading instruction known as Whole Language, children are not taught which letter goes with which sound but are immersed in a book-rich environment where reading skills are expected to blossom spontaneously.11 In the philosophy of mathematics instruction known as constructivism, children are not drilled with arithmetic tables but are enjoined to rediscover mathematical truths themselves by solving problems in groups.12 Both methods fare badly when students’ learning is assessed objectively, but advocates of the methods tend to disdain standardized testing.

An understanding of the mind as a complex system shaped by evolution runs against these philosophies. The alternative has emerged from the work of cognitive scientists such as Susan Carey, Howard Gardner, and David Geary.13 Education is neither writing on a blank slate nor allowing the child's nobility to come into flower. Rather, education is a technology that tries to make up for what the human mind is innately bad at. Children don't have to go to school to learn to walk, talk, recognize objects, or remember the personalities of their friends, even though these tasks are much harder than reading, adding, or remembering dates in history. They do have to go to school to learn written language, arithmetic, and science, because those bodies of knowledge and skill were invented too recently for any species-wide knack for them to have evolved.

Far from being empty receptacles or universal learners, then, children are equipped with a toolbox of implements for reasoning and learning in particular ways, and those implements must be cleverly recruited to master problems for which they were not designed. That requires not just inserting new facts and skills in children's minds but debugging and disabling old ones. Students cannot learn Newtonian physics until they unlearn their intuitive impetus-based physics.14 They cannot learn modern biology until they unlearn their intuitive biology, which thinks in terms of vital essences. And they cannot learn evolution until they unlearn their intuitive engineering, which attributes design to the intentions of a designer.15

Schooling also requires pupils to expose and reinforce skills that are ordinarily buried in unconscious black boxes. When children learn to read, the vowels and consonants that are seamlessly woven together in speech must be forced into children's awareness before they can associate them with squiggles on a page.16 Effective education may also require co-opting old faculties to deal with new demands. Snatches of language can be pressed into service to do calculation, as when we recall the stanza “Five times five is twenty-five.”17 The logic of grammar can be used to grasp large numbers: the expression four thousand three hundred and fifty-seven has the grammatical structure of an English noun phrase like hat, coat, and mittens. When a student parses the number phrase she can call to mind the mental operation of aggregation, which is related to the mathematical operation of addition.18 Spatial cognition is drafted into understanding mathematical relationships through the use of graphs, which turn data or equations into shapes.19 Intuitive engineering supports the learning of anatomy and physiology (organs are understood as gadgets with functions), and intuitive physics supports the learning of chemistry and biology (stuff, including living stuff, is made out of tiny, bouncy, sticky objects).20
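
To make the number-phrase example concrete, here is a minimal sketch of how the parts of such a phrase can be read as instructions to multiply and aggregate. The word list and the parsing scheme are illustrative assumptions, not a model of what children actually do:

```python
# A minimal sketch: evaluating an English number phrase by treating its
# grammatical parts as instructions to multiply and add.
# The vocabulary and the example phrase are illustrative assumptions.

UNITS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

def number_phrase_value(phrase: str) -> int:
    total, current = 0, 0
    for word in phrase.replace("-", " ").split():
        if word == "and":
            continue                 # "three hundred and fifty-seven"
        elif word in UNITS:
            current += UNITS[word]
        elif word in TENS:
            current += TENS[word]
        elif word == "hundred":
            current *= 100           # "three hundred" becomes 300
        elif word == "thousand":
            total += current * 1000  # close off the thousands group
            current = 0
    return total + current           # aggregate the remaining group

print(number_phrase_value("four thousand three hundred and fifty-seven"))  # 4357
```

The point is only that the hierarchical structure of the phrase supplies the slots where multiplication and addition apply.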

Geary points out a final implication. Because much of the content of education is not cognitively natural, the process of mastering it may not always be easy and pleasant, notwithstanding the mantra that learning is fun. Children may be innately motivated to make friends, acquire status, hone motor skills, and explore the physical world, but they are not necessarily motivated to adapt their cognitive faculties to unnatural tasks like formal mathematics. A family, peer group, and culture that ascribe high status to school achievement may be needed to give a child the motive to persevere toward effortful feats of learning whose rewards are apparent only over the long term.21

~

The layperson's intuitive psychology or “theory of mind” is one of the brain's most striking abilities. We do not treat other people as wind-up dolls but think of them as being animated by minds: nonphysical entities we cannot see or touch but that are as real to us as bodies and objects. Aside from allowing us to predict people's behavior from their beliefs and desires, our theory of mind is tied to our ability to empathize and to our conception of life and death. The difference between a dead body and a living one is that a dead body no longer contains the vital force we call a mind. Our theory of mind is the source of the concept of the soul. The ghost in the machine is deeply rooted in our way of thinking about people.

A belief in the soul, in turn, meshes with our moral convictions. The core of morality is the recognition that others have interests as we do — that they “feel want, taste grief, need friends,” as Shakespeare put it — and therefore that they have a right to life, liberty, and the pursuit of their interests. But who are those “others”? We need a boundary that allows us to be callous to rocks and plants but forces us to treat other humans as “persons” that possess inalienable rights. Otherwise, it seems, we would place ourselves on a slippery slope that ends in the disposal of inconvenient people or in grotesque deliberations on the value of individual lives. As Pope John Paul II pointed out, the notion that every human carries infinite value by virtue of possessing a soul would seem to give us that boundary.

Until recently the intuitive concept of the soul served us pretty well. Living people had souls, which come into existence at the moment of conception and leave their bodies when they die. Animals, plants, and inanimate objects do not have souls at all. But science is showing that what we call the soul — the locus of sentience, reason, and will — consists of the information-processing activity of the brain, an organ governed by the laws of biology. In an individual person it comes into existence gradually through the differentiation of tissues growing from a single cell. In the species it came into existence gradually as the forces of evolution modified the brains of simpler animals. And though our concept of souls used to fit pretty well with natural phenomena — a woman was either pregnant or not, a person was either dead or alive — biomedical research is now presenting us with cases where the two are out of register. These cases are not just scientific curiosities but are intertwined with pressing issues such as contraception, abortion, infanticide, animal rights, cloning, euthanasia, and research involving human embryos, especially the harvesting of stem cells.

In the face of these difficult choices it is tempting to look to biology to find or ratify boundaries such as “when life begins.” But that only highlights the clash between two incommensurable ways of conceiving life and mind. The intuitive and morally useful concept of an immaterial spirit simply cannot be reconciled with the scientific concept of brain activity emerging gradually in ontogeny and phylogeny. No matter where we try to draw the line between life and nonlife, or between mind and nonmind, ambiguous cases pop up to challenge our moral intuitions.

The closest event we can find to a thunderclap marking the entry of a soul into the world is the moment of conception. At that instant a new human genome is determined, and we have an entity destined to develop into a unique individual. The Catholic Church and certain other Christian denominations designate conception as the moment of ensoulment and the beginning of life (which, of course, makes abortion a form of murder). But just as a microscope reveals that a straight edge is really ragged, research on human reproduction shows that the “moment of conception” is not a moment at all. Sometimes several sperm penetrate the outer membrane of the egg, and it takes time for the egg to eject the extra chromosomes. What and where is the soul during this interval? Even when a single sperm enters, its genes remain separate from those of the egg for a day or more, and it takes yet another day or so for the newly merged genome to control the cell. So the “moment” of conception is in fact a span of twenty-four to forty-eight hours.22 Nor is the conceptus destined to become a baby. Between two-thirds and three-quarters of them never implant in the uterus and are spontaneously aborted, some because they are genetically defective, others for no discernible reason.

Still, one might say that at whatever point during this interlude the new genome is formed, the specification of a unique new person has come into existence. The soul, by this reasoning, may be identified with the genome. But during the next few days, as the embryo's cells begin to divide, they can split into several embryos, which develop into identical twins, triplets, and so on. Do identical twins share a soul? Did the Dionne quintuplets make do with one-fifth of a soul each? If not, where did the four extra souls come from? Indeed, every cell in the growing embryo is capable, with the right manipulations, of becoming a new embryo that can grow into a child. Does a multicell embryo consist of one soul per cell, and if so, where do the other souls go when the cells lose that ability? And not only can one embryo become two people, but two embryos can become one person. Occasionally two fertilized eggs, which ordinarily would go on to become fraternal twins, merge into a single embryo that develops into a person who is a genetic chimera: some of her cells have one genome, others have another genome. Does her body house two souls?

For that matter, if human cloning ever became possible (and there appears to be no technical obstacle), every cell in a person's body would have the special ability that is supposedly unique to a conceptus, namely developing into a human being. True, the genes in a cheek cell can become a person only with unnatural intervention, but that is just as true for an egg that is fertilized in vitro. Yet no one would deny that children conceived by IVF have souls.

The idea that ensoulment takes place at conception is not only hard to reconcile with biology but does not have the moral superiority credited to it. It implies that we should prosecute users of intrauterine contraceptive devices and the “morning-after pill” for murder, because they prevent the conceptus from implanting. It implies that we should divert medical research from curing cancer and heart disease to preventing the spontaneous miscarriages of vast numbers of microscopic conceptuses. It impels us to find surrogate mothers for the large number of embryos left over from IVF that are currently sitting in fertility clinic freezers. It would outlaw research on conception and early embryonic development that promises to reduce infertility, birth defects, and pediatric cancer, and research on stem cells that could lead to treatments for Alzheimer's disease, Parkinson's disease, diabetes, and spinal-cord injuries. And it flouts the key moral intuition that other people are worthy of moral consideration because of their feelings — their ability to love, think, plan, enjoy, and suffer — all of which depend on a functioning nervous system.

The enormous moral costs of equating a person with a conceptus, and the cognitive gymnastics required to maintain that belief in the face of modern biology, can sometimes lead to an agonizing reconsideration of deeply held beliefs. In 2001, Senator Orrin Hatch of Utah broke with his longtime allies in the anti-abortion movement and came out in favor of stem-cell research after studying the science of reproduction and meditating on his Mormon faith. “I have searched my conscience,” he said. “I just cannot equate a child living in the womb, with moving toes and fingers and a beating heart, with an embryo in a freezer.”23

The belief that bodies are invested with souls is not just a product of religious doctrine but embedded in people's psychology and likely to emerge whenever they have not digested the findings of biology. The public reaction to cloning is a case in point. Some people fear that cloning would present us with the option of becoming immortal, others that it could produce an army of obedient zombies, or a source of organs for the original person to harvest when needed. In the recent Arnold Schwarzenegger movie The 6th Day, clones are called “blanks,” and their DNA gives them only a physical form, not a mind; they acquire a mind when a neural recording of the original person is downloaded into them. When Dolly the sheep was cloned in 1997, the cover of Der Spiegel showed a parade of Claudia Schiffers, Hitlers, and Einsteins, as if being a supermodel, fascist dictator, or scientific genius could be copied along with the DNA.

Clones, in fact, are just identical twins born at different times. If Einstein had a twin, he would not have been a zombie, would not have continued Einstein's stream of consciousness if Einstein had predeceased him, would not have given up his vital organs without a struggle, and probably would have been no Einstein (since intelligence is only partly heritable). The same would be true of a person cloned from a speck of Einstein. The bizarre misconceptions of cloning can be traced to the persistent belief that the body is suffused with a soul. One conception of cloning, which sets off a fear of an army of zombies, blanks, or organ farms, imagines the process to be the duplication of a body without a soul. The other, which sets off fears of a Faustian grab at immortality or of a resurrected Hitler, conceives of cloning as duplicating the body together with the soul. This conception may also underlie the longing of some bereaved parents for a dead child to be cloned, as if that would bring the child back to life. In fact, the clone would not only grow up in a different world from the one the dead sibling grew up in, but would have different brain tissue and would traverse a different line of sentient experience.

The discovery that what we call “the person” emerges piecemeal from a gradually developing brain forces us to reframe problems in bioethics. It would have been convenient if biologists had discovered a point at which the brain is fully assembled and is plugged in and turned on for the first time, but that is not how brains work. The nervous system emerges in the embryo as a simple tube and differentiates into a brain and spinal cord. The brain begins to function in the fetus, but it continues to wire itself well into childhood and even adolescence. The demand by both religious and secular ethicists that we identify the “criteria for personhood” assumes that a dividing line in brain development can be found. But any claim that such a line has been sighted leads to moral absurdities.

If we set the boundary for personhood at birth, we should be prepared to allow an abortion minutes before birth, despite the lack of any significant difference between a late-term fetus and a neonate. It seems more reasonable to draw the line at viability. But viability is a continuum that depends on the state of current biomedical technology and on the risks of impairment that parents are willing to tolerate in their child. And it invites the obvious rejoinder: if it is all right to abort a twenty-four-week fetus, then why not the barely distinguishable fetus of twenty-four weeks plus one day? And if that is permissible, why not a fetus of twenty-four weeks plus two days, or three days, and so on until birth? On the other hand, if it is impermissible to abort a fetus the day before its birth, then what about two days before, and three days, and so on, all the way back to conception?

We face the same problem in reverse when considering euthanasia and living wills at the end of life. Most people do not depart this world in a puff of smoke but suffer a gradual and uneven breakdown of the various parts of the brain and body. Many kinds and degrees of existence lie between the living and the dead, and that will become even more true as medical technology improves.

We face the problem again in grappling with demands for animal rights. Activists who grant the right to life to any sentient being must conclude that a hamburger eater is a party to murder and that a rodent exterminator is a perpetrator of mass murder. They must outlaw medical research that would sacrifice a few mice but save a million children from painful deaths (since no one would agree to drafting a few human beings for such experiments, and on this view mice have the rights we ordinarily grant to people). On the other hand, an opponent of animal rights who maintains that personhood comes from being a member of Homo sapiens is just a species bigot, no more thoughtful than the race bigots who value the lives of whites more than blacks. After all, other mammals fight to stay alive, appear to experience pleasure, and undergo pain, fear, and stress when their well-being is compromised. The great apes also share our higher pleasures of curiosity and love of kin, and our deeper aches of boredom, loneliness, and grief. Why should those interests be respected for our species but not for others?

Some moral philosophers try to thread a boundary across this treacherous landscape by equating personhood with cognitive traits that humans happen to possess. These include an ability to reflect upon oneself as a continuous locus of consciousness, to form and savor plans for the future, to dread death, and to express a choice not to die.24 At first glance the boundary is appealing because it puts humans on one side and animals and conceptuses on the other. But it also implies that nothing is wrong with killing unwanted newborns, the senile, and the mentally handicapped, who lack the qualifying traits. Almost no one is willing to accept a criterion with those implications.

There is no solution to these dilemmas, because they arise out of a fundamental incommensurability: between our intuitive psychology, with its all-or-none concept of a person or soul, and the brute facts of biology, which tell us that the human brain evolved gradually, develops gradually, and can die gradually. And that means that moral conundrums such as abortion, euthanasia, and animal rights will never be resolved in a decisive and intuitively satisfying way. This does not mean that no policy is defensible and that the whole matter should be left to personal taste, political power, or religious dogma. As the bioethicist Ronald Green has pointed out, it just means we have to reconceptualize the problem: from finding a boundary in nature to choosing a boundary that best trades off the conflicting goods and evils for each policy dilemma.25 We should make decisions in each case that can be practically implemented, that maximize happiness, and that minimize current and future suffering. Many of our current policies are already compromises of this sort: research on animals is permitted but regulated; a late-term fetus is not awarded full legal status as a person but may not be aborted unless it is necessary to protect the mother's life or health. Green notes that the shift from finding boundaries to choosing boundaries is a conceptual revolution of Copernican proportions. But the old conceptualization, which amounts to trying to pinpoint when the ghost enters the machine, is scientifically untenable and has no business guiding policy in the twenty-first century.

The traditional argument against pragmatic, case-by-case decisions is that they lead to slippery slopes. If we allow abortion, we will soon allow infanticide; if we permit research on stem cells, we will bring on a Brave New World of government-engineered humans. But here, I think, the nature of human cognition can get us out of the dilemma rather than pushing us into one. A slippery slope assumes that conceptual categories must have crisp boundaries that allow in-or-out decisions, or else anything goes. But that is not how human concepts work. As we have seen, many everyday concepts have fuzzy boundaries, and the mind distinguishes between a fuzzy boundary and no boundary at all. “Adult” and “child” are fuzzy categories, which is why we could raise the drinking age to twenty-one or lower the voting age to eighteen. But that did not put us on a slippery slope in which we eventually raised the drinking age to fifty or lowered the voting age to five. Those policies really would violate our concepts of “child” and “adult,” fuzzy though their boundaries may be. In the same way, we can bring our concepts of life and mind into register with biological reality without necessarily slipping down a slope.

~

When a 1999 cyclone in India left millions of people in danger of starvation, some activists denounced relief societies for distributing a nutritious grain meal because it contained genetically modified varieties of corn and soybeans (varieties that had been eaten without apparent harm in the United States). These activists are also opposed to “golden rice,” a genetically modified variety that could prevent blindness in millions of children in the developing world and alleviate vitamin A deficiency in a quarter of a billion more.26 Other activists have vandalized research facilities at which the safety of genetically modified foods is tested and new varieties are developed. For these people, even the possibility that such foods could be safe is unacceptable.

A 2001 report by the European Union reviewed eighty-one research projects conducted over fifteen years and failed to find any new risks to human health or to the environment posed by genetically modified crops.27 This is no surprise to a biologist. Genetically modified foods are no more dangerous than “natural” foods because they are not fundamentally different from natural foods. Virtually every animal and vegetable sold in a health-food store has been “genetically modified” for millennia by selective breeding and hybridization. The wild ancestor of carrots was a thin, bitter white root; the ancestor of corn had an inch-long, easily shattered cob with a few small, rock-hard kernels. Plants are Darwinian creatures with no particular desire to be eaten, so they did not go out of their way to be tasty, healthy, or easy for us to grow and harvest. On the contrary: they did go out of their way to deter us from eating them, by evolving irritants, toxins, and bitter-tasting compounds.28 So there is nothing especially safe about natural foods. The “natural” method of selective breeding for pest resistance simply increases the concentration of the plant's own poisons; one variety of natural potato had to be withdrawn from the market because it proved to be toxic to people.29 Similarly, natural flavors — defined by one food scientist as “a flavor that's been derived with an out-of-date technology” — are often chemically indistinguishable from their artificial counterparts, and when they are distinguishable, sometimes the natural flavor is the more dangerous one. When “natural” almond flavor, benzaldehyde, is derived from peach pits, it is accompanied by traces of cyanide; when it is synthesized as an “artificial flavor,” it is not.30

A blanket fear of all artificial and genetically modified foods is patently irrational on health grounds, and it could make food more expensive and hence less available to the poor. Where do these specious fears come from? Partly they arise from the carcinogen-du-jour school of journalism that uncritically reports any study showing elevated cancer rates in rats fed megadoses of chemicals. But partly they come from an intuition about living things that was first identified by the anthropologist James George Frazer in 1890 and has recently been studied in the lab by Paul Rozin, Susan Gelman, Frank Keil, Scott Atran, and other cognitive scientists.31

People's intuitive biology begins with the concept of an invisible essence residing in living things, which gives them their form and powers. These essentialist beliefs emerge early in childhood, and in traditional cultures they dominate reasoning about plants and animals. Often the intuitions serve people well. They allow preschoolers to deduce that a raccoon that looks like a skunk will have raccoon babies, that a seed taken from an apple and planted with flowers in a pot will produce an apple tree, and that an animal's behavior depends on its innards, not on its appearance. They allow traditional peoples to deduce that different-looking creatures (such as a caterpillar and a butterfly) can belong to the same kind, and they impel them to extract juices and powders from living things and try them as medicines, poisons, and food supplements. They can prevent people from sickening themselves by eating things that have been in contact with infectious substances such as feces, sick people, and rotting meat.32

But intuitive essentialism can also lead people into error.33 Children falsely believe that a child of English-speaking parents will speak English even if brought up in a French-speaking family, and that boys will have short hair and girls will wear dresses even if they are brought up with no other member of their sex from which they can learn those habits. Traditional peoples believe in sympathetic magic, otherwise known as voodoo. They think similar-looking objects have similar powers, so that a ground-up rhinoceros horn is a cure for erectile dysfunction. And they think that animal parts can transmit their powers to anything they mingle with, so that eating or wearing a part of a fierce animal will make one fierce.

Educated Westerners should not feel too smug. Rozin has shown that we have voodoolike intuitions ourselves. Most Americans won't touch a sterilized cockroach, or even a plastic one, and won't drink juice that the roach has touched for even a fraction of a second.34 And even Ivy League students believe that you are what you eat. They judge that a tribe that hunts turtles for their meat and wild boar for their bristles will be good swimmers, and that a tribe that hunts turtles for their shells and wild boar for their meat will be tough fighters.35 In his history of biology, Ernst Mayr showed that many biologists originally rejected the theory of natural selection because of their belief that a species was a pure type defined by an essence. They could not wrap their minds around the concept that species are populations of variable individuals and that one can blend into another over evolutionary time.36

In this context, the fear of genetically modified foods no longer seems so strange: it is simply the standard human intuition that every living thing has an essence. Natural foods are thought to have the pure essence of the plant or animal and to carry with them the rejuvenating powers of the pastoral environment in which they grew. Genetically modified foods, or foods containing artificial additives, are thought of as being deliberately laced with a contaminant tainted by its origins in an acrid laboratory or factory. Arguments that invoke genetics, biochemistry, evolution, and risk analysis are likely to fall on deaf ears when pitted against this deep-rooted way of thinking.

Essentialist intuitions are not the only reason that perceptions of danger can be off the mark. Risk analysts have discovered to their bemusement that people's fears are often way out of line with objective hazards. Many people avoid flying, though car travel is eleven times more dangerous. They fear getting eaten by a shark, though they are four hundred times more likely to drown in their bathtub. They clamor for expensive measures to get chloroform and trichloroethylene out of drinking water, though they are hundreds of times more likely to get cancer from a daily peanut butter sandwich (since peanuts can carry a highly carcinogenic mold).37 Some of these risks may be misestimated because they tap into our innate fears of heights, confinement, predation, and poisoning.38 But even when people are presented with objective information about danger, they may not appreciate it because of the way the mind assesses probabilities.

A statement like “The chance of dying of botulism poisoning in a given year is .000001” is virtually incomprehensible. For one thing, magnitudes with lots of zeroes at the beginning or end are beyond the ken of our number sense. The psychologist Paul Slovic and his colleagues found that people are unmoved by a lecture on the hazards of not wearing a seat belt which mentions that a fatal collision occurs once in every 3.5 million person-trips. But they say they will buckle up when the odds are recalculated to show that their lifetime chance of dying in a collision is one percent.39
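
Slovic's reframing is simple arithmetic. A minimal sketch, assuming (purely for illustration) something on the order of 35,000 person-trips over a lifetime of driving, a figure assumed here rather than taken from the original study:

```python
# A minimal sketch of converting a per-trip fatality risk into a lifetime risk.
# The number of lifetime trips is an assumed, illustrative figure.

per_trip_risk = 1 / 3_500_000   # one fatal collision per 3.5 million person-trips
lifetime_trips = 35_000         # assumption: roughly two trips a day for fifty years

# Chance of at least one fatal collision over a lifetime, treating trips as independent.
lifetime_risk = 1 - (1 - per_trip_risk) ** lifetime_trips
print(f"{lifetime_risk:.1%}")   # about 1%
```

The hazard itself is unchanged; only the reframing has moved it into a range our number sense can register.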

The other reason for the incomprehensibility of many statistics is that the probability of a single event, such as my dying in a plane crash (as opposed to the frequency of some events relative to others, such as the proportion of all airline passengers who die in crashes), is a genuinely puzzling concept, even to mathematicians. What sense can we make of the odds offered by expert bookmakers for particular events, such as that the Archbishop of Canterbury will confirm the second coming within a year (1000 to 1), that a Mr. Braham of Luton, England, will invent a perpetual motion machine (250 to 1), or that Elvis Presley is alive and well (1000 to 1)?40 Either Elvis is alive or he isn't, so what does it mean to say that the probability that he is alive is .001? Similarly, what should we think when aviation safety analysts tell us that on average a single landing in a commercial airliner reduces one's life expectancy by fifteen minutes? When the plane comes down, either my life expectancy will be reduced by a lot more than fifteen minutes or it won't be reduced at all. Some mathematicians say that the probability of a single event is more like a gut feeling of confidence, expressed on a scale of 0 to 1, than a meaningful mathematical quantity.41
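
One way to unpack the fifteen-minutes-per-landing figure is as an expected value: a minuscule probability of losing decades, averaged out into minutes. The inputs below are illustrative assumptions, not the analysts' actual numbers:

```python
# A minimal sketch of reading "life expectancy reduced by about fifteen minutes
# per landing" as an expected value. Both inputs are illustrative assumptions.

p_fatal_per_flight = 1 / 1_000_000            # assumed chance of a fatal crash per flight
remaining_life_minutes = 40 * 365 * 24 * 60   # assumed forty years of remaining life

expected_loss_minutes = p_fatal_per_flight * remaining_life_minutes
print(f"{expected_loss_minutes:.0f} minutes")  # about 21 minutes
```

The average is well defined over many flights, but for any single landing the outcome is all or nothing, which is exactly why the single-event probability feels so slippery.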

The mind is more comfortable in reckoning probabilities in terms of the relative frequency of remembered or imagined events.42 That can make recent and memorable events — a plane crash, a shark attack, an anthrax infection — loom larger in one's worry list than more frequent and boring events, such as the car crashes and ladder falls that get printed beneath the fold on page B14. And it can lead risk experts to speak one language and ordinary people to hear another. In hearings for a proposed nuclear waste site, an expert might present a fault tree that lays out the conceivable sequences of events by which radioactivity might escape. For example, erosion, cracks in the bedrock, accidental drilling, or improper sealing might cause the release of radioactivity into groundwater. In turn, groundwater movement, volcanic activity, or an impact of a large meteorite might cause the release of radioactive wastes into the biosphere. Each train of events can be assigned a probability, and the aggregate probability of an accident from all the causes can be estimated. When people hear these analyses, however, they are not reassured but become more fearful than ever — they hadn't realized there are so many ways for something to go wrong! They mentally tabulate the number of disaster scenarios, rather than mentally aggregating the probabilities of the disaster scenarios.43
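
The mismatch can be made explicit. Here is a minimal sketch contrasting what the analyst computes with what listeners tend to register; the scenario probabilities are made-up illustrative numbers:

```python
# A minimal sketch of two readings of a fault tree. The scenario probabilities
# are made-up illustrative numbers, not estimates for any real site.

scenarios = {
    "erosion": 1e-6,
    "cracks in the bedrock": 5e-7,
    "accidental drilling": 2e-7,
    "improper sealing": 1e-6,
}

# What the analyst reports: the chance that at least one pathway occurs,
# treating the pathways as independent.
p_no_release = 1.0
for p in scenarios.values():
    p_no_release *= 1 - p
aggregate_probability = 1 - p_no_release

# What listeners tend to tabulate: the sheer number of ways things can go wrong.
number_of_ways = len(scenarios)

print(f"aggregate probability of a release: {aggregate_probability:.1e}")  # about 2.7e-06
print(f"number of disaster scenarios: {number_of_ways}")                   # 4
```

Adding one more remote scenario barely moves the aggregate probability, but it adds a full item to the mental tally of things that can go wrong.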

None of this implies that people are dunces or that “experts” should ram unwanted technologies down their throats. Even with a complete understanding of the risks, reasonable people might choose to forgo certain technological advances. If something is viscerally revolting, a democracy should allow people to reject it whether or not it is “rational” by some criterion that ignores our psychology. Many people would reject vegetables grown in sanitized human waste and would avoid an elevator with a glass floor, not because they believe these things are dangerous but because the thought gives them the willies. If they have the same reaction to eating genetically modified foods or living next to a nuclear power plant, they should have the option of rejecting them, too, as long as they do not try to force their preferences on others or saddle them with the costs.

Also, even if technocrats provide reasonable estimates of a risk (which is itself an iffy enterprise), they cannot dictate what level of risk people ought to accept. People might object to a nuclear power plant that has a minuscule risk of a meltdown not because they overestimate the risk but because they feel that the costs of the catastrophe, no matter how remote, are too dreadful. And of course any of these tradeoffs may be unacceptable if people perceive that the benefits would go to the wealthy and powerful while they themselves absorb the risks.

Nonetheless, understanding the difference between our best science and our ancient ways of thinking can only make our individual and collective decisions better informed. It can help scientists and journalists explain a new technology in the face of the most common misunderstandings. And it can help all of us understand the technology so that we can accept or reject it on grounds that we can justify to ourselves and to others.

~

In The Wealth of Nations, Adam Smith wrote that there is “a certain propensity in human nature ... to truck, barter, and exchange one thing for another.” The exchange of goods and favors is a human universal and may have an ancient history. In archaeological sites tens of millennia old, pretty seashells and sharp flints are found hundreds of miles from their sources, which suggests that they got there by networks of trade.44

The anthropologist Alan Fiske has surveyed the ethnographic literature and found that virtually all human transactions fall into four patterns, each with a distinctive psychology.45 The first is Communal Sharing: groups of people, such as the members of a family, share things without keeping track of who gets what. The second is Authority Ranking: dominant people confiscate what they want from lower-ranking ones. But the other two types of transactions are defined by exchanges.

The most common kind of exchange is what Fiske calls Equality Matching. Two people exchange goods or favors at different times, and the traded items are identical or at least highly similar or easily comparable. The trading partners assess their debts by simple addition or subtraction and are satisfied when the favors even out. The partners feel that the exchange binds them in a relationship, and often people will consummate exchanges just to maintain it. For example, in the trading rings of the Pacific Islands, gifts circulate from chief to chief, and the original giver may eventually get his gift back. (Many Americans suspect that this is what happens to Christmas fruitcakes.) When someone violates an Equality Matching relationship by taking a benefit without returning it in kind, the other party feels cheated and may retaliate aggressively. Equality Matching is the only mechanism of trade in most hunter-gatherer societies. Fiske notes that it is supported by a mental model of tit-for-tat reciprocity, and Leda Cosmides and John Tooby have shown that this way of thinking comes easily to Americans as well.46 It appears to be the core of our intuitive economics.

Fiske contrasts Equality Matching with a very different system called Market Pricing, the system of rents, prices, wages, and interest rates that underlies modern economies. Market Pricing relies on the mathematics of multiplication, division, fractions, and large numbers, together with the social institutions of money, credit, written contracts, and complex divisions of labor. Market Pricing is absent in hunter-gatherer societies, and we know it played no role in our evolutionary history because it relies on technologies like writing, money, and formal mathematics, which appeared only recently. Even today the exchanges carried out by Market Pricing may involve causal chains that are impossible for any individual to grasp in full. I press some keys to enter characters into this manuscript today and entitle myself to receive some groceries years from now, not because I will barter a copy of The Blank Slate to a banana grower but because of a tangled web of third and fourth and fifth parties (publishers, booksellers, truckers, commodity brokers) that I depend on without fully understanding what they do.

When people have different ideas about which of these four modes of interacting applies to a current relationship, the result can range from blank incomprehension to acute discomfort or outright hostility. Think about a dinner guest offering to pay the host for her meal, a person barking an order to a friend, or an employee helping himself to a shrimp off the boss's plate. Misunderstandings in which one person thinks of a transaction in terms of Equality Matching and another thinks in terms of Market Pricing are even more pervasive and can be even more dangerous. They tap into very different psychologies, one of them intuitive and universal, the other rarefied and learned, and clashes between them have been common in economic history.

Economists refer to “the physical fallacy”: the belief that an object has a true and constant value, as opposed to being worth only what someone is willing to pay for it at a given place and time.47 This is simply the difference between the Equality Matching and Market Pricing mentalities. The physical fallacy may not arise when three chickens are exchanged for one knife, but when the exchanges are mediated by money, credit, and third parties, the fallacy can have ugly consequences. The belief that goods have a “just price” implies that it is avaricious to charge anything higher, and the result has been mandatory pricing schemes in medieval times, communist regimes, and many Third World countries. Such attempts to work around the law of supply and demand have usually led to waste, shortages, and black markets. Another consequence of the physical fallacy is the widespread practice of outlawing interest, which comes from the intuition that it is rapacious to demand additional money from someone who has paid back exactly what he borrowed. Of course, the only reason people borrow at one time and repay it later is that the money is worth more to them at the time they borrow it than it will be at the time they repay it. So when regimes enact sweeping usury laws, people who could put money to productive use cannot get it, and everyone's standards of living go down.48

Just as the value of something may change with time, which creates a niche for lenders who move valuable things around in time, so it may change with space, which creates a niche for middlemen who move valuable things around in space. A banana is worth more to me in a store down the street than it is in a warehouse a hundred miles away, so I am willing to pay more to the grocer than I would to the importer — even though by “eliminating the middleman” I could pay less per banana. For similar reasons, the importer is willing to charge the grocer less than he would charge me.

But because lenders and middlemen do not cause tangible objects to come into being, their contributions are difficult to grasp, and they are often thought of as skimmers and parasites. A recurring event in human history is the outbreak of ghettoization, confiscation, expulsion, and mob violence against middlemen, often ethnic minorities who learned to specialize in the middleman niche.49 The Jews in Europe are the most familiar example, but the expatriate Chinese, the Lebanese, the Armenians, and the Gujeratis and Chettyars of India have suffered similar histories of persecution.

One economist in an unusual situation showed how the physical fallacy does not depend on any unique historical circumstance but easily arises from human psychology. He watched the entire syndrome emerge before his eyes when he spent time in a World War II prisoner-of-war camp. Every month the prisoners received identical packages from the Red Cross. A few prisoners circulated through the camp, trading and lending chocolates, cigarettes, and other commodities among prisoners who valued some items more than others or who had used up their own rations before the end of the month. The middlemen made a small profit from each transaction, and as a result they were deeply resented — a microcosm of the tragedy of the middleman minority. The economist wrote: “[The middleman's] function, and his hard work in bringing buyer and seller together, were ignored; profits were not regarded as a reward for labour, but as the result of sharp practises. Despite the fact that his very existence was proof to the contrary, the middleman was held to be redundant.”50

The obvious cure for the tragic shortcomings of human intuition in a high-tech world is education. And this offers priorities for educational policy: to provide students with the cognitive tools that are most important for grasping the modern world and that are most unlike the cognitive tools they are born with. The perilous fallacies we have seen in this chapter, for example, would give high priority to economics, evolutionary biology, and probability and statistics in any high school or college curriculum. Unfortunately, most curricula have barely changed since medieval times, and are barely changeable, because no one wants to be the philistine who seems to be saying that it is unimportant to learn a foreign language, or English literature, or trigonometry, or the classics. But no matter how valuable a subject may be, there are only twenty-four hours in a day, and a decision to teach one subject is also a decision not to teach another one. The question is not whether trigonometry is important, but whether it is more important than statistics; not whether an educated person should know the classics, but whether it is more important for an educated person to know the classics than to know elementary economics. In a world whose complexities are constantly challenging our intuitions, these tradeoffs cannot responsibly be avoided.

~

“Our nature is an illimitable space through which the intelligence moves without coming to an end,” wrote the poet Wallace Stevens in 1951.51 The limitlessness of intelligence comes from the power of a combinatorial system. Just as a few notes can combine into any melody and a few characters can combine into any printed text, a few ideas — person, place, thing, cause, change, move, and, or, not — can combine into an illimitable space of thoughts.52 The ability to conceive an unlimited number of new combinations of ideas is the powerhouse of human intelligence and a key to our success as a species. Tens of thousands of years ago our ancestors conceived new sequences of actions that could drive game, extract a poison, treat an illness, or secure an alliance. The modern mind can conceive of a substance as a combination of atoms, the plan for a living thing as the combination of DNA nucleotides, and a relationship among quantities as a combination of mathematical symbols. Language, itself a combinatorial system, allows us to share these intellectual fruits.

The combinatorial powers of the human mind can help explain a paradox about the place of our species on the planet. Two hundred years ago the economist Thomas Malthus (1766–1834) called attention to two enduring features of human nature. One is that “food is necessary for the existence of man.” The other is that “the passion between the sexes is necessary and will remain nearly in its present state.” He famously deduced:

The power of population is indefinitely greater than the power in the earth to produce subsistence for man. Population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetic ratio. A slight acquaintance with numbers will show the immensity of the first power in comparison with the second.
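
The “slight acquaintance with numbers” is easy to supply. A minimal sketch of the comparison, with starting values and growth rates chosen arbitrarily to illustrate the two kinds of growth:

```python
# A minimal sketch of Malthus's comparison: population increasing in a
# geometrical ratio, subsistence in an arithmetic ratio. The starting values
# and growth rates are arbitrary illustrations.

population = 1.0    # doubles each generation (geometric growth)
subsistence = 1.0   # gains one fixed unit each generation (arithmetic growth)

for generation in range(1, 9):
    population *= 2
    subsistence += 1
    print(f"generation {generation}: population {population:4.0f}   subsistence {subsistence:.0f}")

# After eight generations the ratio is 256 to 9 and widening.
```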

Malthus depressingly concluded that an increasing proportion of humanity would starve, and that efforts to aid them would only lead to more misery because the poor would breed children doomed to hunger in their turn. Many recent prophets of gloom reiterated his argument. In 1967 William and Paul Paddock wrote a book called Famine 1975! and in 1970 the biologist Paul Ehrlich, author of The Population Bomb, predicted that sixty-five million Americans and four billion other people would starve to death in the 1980s. In 1972 a group of big thinkers known as the Club of Rome predicted that either natural resources would suffer from catastrophic declines in the ensuing decades or the world would choke in pollutants.

The Malthusian predictions of the 1970s have been disconfirmed. Ehrlich was wrong both about the four billion victims of starvation and about declining resources. In 1980 he bet the economist Julian Simon that five strategic metals would become increasingly scarce by the end of the decade and would thus rise in price. He lost five out of five bets. The famines and shortages never happened, despite increases both in the number of people on Earth (now six billion and counting) and in the amount of energy and resources consumed by each one.53 Horrific famines still occur, of course, but not because of a worldwide discrepancy between the number of mouths and the amount of food. The economist Amartya Sen has shown that they can almost always be traced to short-lived conditions or to political and military upheavals that prevent food from reaching the people who need it.54

The state of our planet is a vital concern, and we need the clearest possible understanding of where the problems lie so as not to misdirect our efforts. The repeated failure of simple Malthusian thinking shows that it cannot be the best way to analyze environmental challenges. Still, Malthus's logic seems impeccable. Where did it go wrong?

The immediate problem with Malthusian prophecies is that they underestimate the effects of technological change in increasing the resources that support a comfortable life.55 In the twentieth century food supplies increased exponentially, not linearly. Farmers grew more crops on a given plot of land. Processors transformed more of the crops into edible food. Trucks, ships, and planes got the food to more people before it spoiled or was eaten by pests. Reserves of oil and minerals increased, rather than decreased, because engineers could find more of them and figure out new ways to get at them.

Many people are reluctant to grant technology this seemingly miraculous role. A technology booster sounds too much like the earnest voiceover in a campy futuristic exhibit at the world's fair. Technology may have bought us a temporary reprieve, one might think, but it is not a source of inexhaustible magic. It cannot refute the laws of mathematics, which pit exponential population growth against finite, or at best arithmetically increasing, resources. Optimism would seem to require a faith that the circle can be squared.

But recently the economist Paul Romer has invoked the combinatorial nature of cognitive information processing to show how the circle might be squared after all.56 He begins by pointing out that human material existence is limited by ideas, not by stuff. People don't need coal or copper wire or paper per se; they need ways to heat their homes, communicate with other people, and store information. Those needs don't have to be satisfied by increasing the availability of physical resources. They can be satisfied by using new ideas — recipes, designs, or techniques — to rearrange existing resources to yield more of what we want. For example, petroleum used to be just a contaminant of water wells; then it became a source of fuel, replacing the declining supply of whale oil. Sand was once used to make glass; now it is used to make microchips and optical fiber.

Romer's second point is that ideas are what economists call “nonrival goods.” Rival goods, such as food, fuel, and tools, are made of matter and energy. If one person uses them, others cannot, as we recognize in the saying “You can't eat your cake and have it.” But ideas are made of information, which can be duplicated at negligible cost. A recipe for bread, a blueprint for a building, a technique for growing rice, a formula for a drug, a useful scientific law, or a computer program can be given away without anything being subtracted from the giver. The seemingly magical proliferation of nonrival goods has recently confronted us with new problems concerning intellectual property, as we try to adapt a legal system that was based on owning stuff to the problem of owning information — such as musical recordings — that can easily be shared over the Internet.

The power of nonrival goods may have been a presence throughout human evolutionary history. The anthropologists John Tooby and Irven DeVore have argued that millions of years ago our ancestors occupied the “cognitive niche” in the world's ecosystem. By evolving mental computations that can model the causal texture of the environment, hominids could play out scenarios in their mind's eye and figure out new ways of exploiting the rocks, plants, and animals around them. Human practical intelligence may have co-evolved with language (which allows know-how to be shared at low cost) and with social cognition (which allows people to cooperate without being cheated), yielding a species that literally lives by the power of ideas.

Romer points out that the combinatorial process of creating new ideas can circumvent the logic of Malthus:

Every generation has perceived the limits to growth that finite resources and undesirable side effects would pose if no new recipes or ideas were discovered. And every generation has underestimated the potential for finding new recipes and ideas. We consistently fail to grasp how many ideas remain to be discovered. The difficulty is the same one we have with compounding. Possibilities do not add up. They multiply.57

For example, a hundred chemical elements, combined serially four at a time and in ten different proportions, can yield 330 billion compounds. If scientists  {239}  evaluated them at a rate of a thousand a day, it would take them a million years to work through the possibilities. The number of ways of assembling instructions into computer programs or parts into machines is equally mind-boggling. At least in principle, the exponential power of human cognition works on the same scale as the growth of the human population, and we can resolve the paradox of the Malthusian disaster that never happened. None of this licenses complacency about our use of natural resources, of course. The fact that the space of possible ideas is staggeringly large does not mean that the solution to a given problem lies in that space or that we will find it by the time we need it. It only means that our understanding of humans’ relation to the material world has to acknowledge not just our bodies and our resources but also our minds.
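For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It takes the 330 billion figure as given; the breakdown into ordered quadruples of distinct elements and the implied number of proportion variants per quadruple is one reading of the setup, not the original calculation.

```python
# A back-of-the-envelope sketch of the combinatorial claim above.
# Assumptions (not from the original text): four distinct elements are
# drawn in order from a periodic table of one hundred, the quoted total
# of 330 billion compounds is taken as given, and evaluation proceeds
# at a thousand compounds per day.
from math import perm

ordered_quadruples = perm(100, 4)        # 100 * 99 * 98 * 97 = 94,109,400
total_compounds = 330_000_000_000        # figure quoted in the text

# The quoted total implies roughly this many proportion variants
# for each ordered choice of four elements (about 3,500).
variants_per_quadruple = total_compounds / ordered_quadruples

days = total_compounds / 1_000           # a thousand evaluations a day
years = days / 365                       # roughly 900,000 -- "a million years"

print(f"{ordered_quadruples:,} ordered quadruples of elements")
print(f"~{variants_per_quadruple:,.0f} proportion variants implied per quadruple")
print(f"~{years:,.0f} years to evaluate every compound")
```

The point of the sketch is only that a handful of modest multipliers is enough to swamp any feasible rate of evaluation, which is the sense in which possibilities multiply rather than add.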

~

The truism that all good things come with costs as well as benefits applies in full to the combinatorial powers of the human mind. If the mind is a biological organ rather than a window onto reality, there should be truths that are literally inconceivable, and limits to how well we can ever grasp the discoveries of science.

The possibility that we might come to the end of our cognitive rope has been brought home by modern physics. We have every reason to believe that the best theories in physics are true, but they present us with a picture of reality that makes no sense to the intuitions about space, time, and matter that evolved in the brains of middle-sized primates. The strange ideas of physics — for instance, that time came into existence with the Big Bang, that the universe is curved in the fourth dimension and possibly finite, and that a particle may act like a wave — just make our heads hurt the more we ponder them. It's impossible to stop thinking thoughts that are literally incoherent, such as “What was it like before the Big Bang?” or “What lies beyond the edge of the universe?” or “How does the damn particle manage to pass through two slits at the same time?” Even the physicists who discovered the nature of reality claim not to understand their theories. Murray Gell-Mann described quantum mechanics as “that mysterious, confusing discipline which none of us really understands but which we know how to use.”58 Richard Feynman wrote, “I think I can safely say that no one understands quantum mechanics.... Do not keep asking yourself, if you can possibly avoid it, ‘But how can it be like that?'... Nobody knows how it can be like that.”59 In another interview, he added, “If you think you understand quantum theory, you don't understand quantum theory!”60

Our intuitions about life and mind, like our intuitions about matter and space, may have run up against a strange world forged by our best science. We have seen how the concept of life as a magical spirit united with our bodies doesn't get along with our understanding of the mind as the activity of a gradually developing brain. Other intuitions about the mind find themselves just  {240}  as flat-footed in pursuit of the advancing frontier of cognitive neuroscience. We have every reason to believe that consciousness and decision making arise from the electrochemical activity of neural networks in the brain. But how moving molecules should throw off subjective feelings (as opposed to mere intelligent computations) and how they bring about choices that we freely make (as opposed to behavior that is caused) remain deep enigmas to our Pleistocene psyches.

These puzzles have an infuriatingly holistic quality to them. Consciousness and free will seem to suffuse the neurobiological phenomena at every level, and cannot be pinpointed to any combination or interaction among parts. The best analyses from our combinatorial intellects provide no hooks on which we can hang these strange entities, and thinkers seem condemned either to denying their existence or to wallowing in mysticism. For better or worse, our world might always contain a wisp of mystery, and our descendants might endlessly ponder the age-old conundrums of religion and philosophy, which ultimately hinge on concepts of matter and mind.61 Ambrose Bierce's The Devil's Dictionary contains the following entry:

Mind, n. A mysterious form of matter secreted by the brain. Its chief activity consists in the endeavor to ascertain its own nature, the futility of the attempt being due to the fact that it has nothing but itself to know itself with.